Clustered Vehicular Federated Learning: Process and Optimization

Authors

Abstract

Federated Learning (FL) is expected to play a prominent role in privacy-preserving machine learning (ML) for autonomous vehicles. FL involves the collaborative training of a single ML model among edge devices, on their distributed datasets, while keeping data local. Although FL requires less communication than classical centralized learning, it remains hard to scale to large models. In vehicular networks, FL must be adapted to the limited communication resources, the mobility of nodes, and the statistical heterogeneity of data distributions. Indeed, a judicious use of communication resources, alongside new perceptive, learning-oriented methods, is vital. To this end, we propose a clustered vehicular FL architecture and the corresponding scheduling processes. The architecture exploits vehicle-to-vehicle (V2V) communications to bypass the communication bottleneck: clusters of vehicles train models simultaneously, and only the aggregated model of each cluster is sent to the multi-access edge computing (MEC) server. The cluster formation is multi-task aware and takes both communication and learning aspects into account. We show through simulations that the proposed process improves learning accuracy in several non-independent-and-identically-distributed (non-i.i.d.) and unbalanced dataset distributions, under these constraints, in comparison with standard FL.
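To make the clustered aggregation concrete, the sketch below outlines one round of a two-level, FedAvg-style process: vehicles in each cluster train locally and are averaged over V2V links by a cluster head, and only one model per cluster reaches the MEC server. This is a minimal illustration under assumed interfaces (the helper names local_update and weighted_average, and the NumPy model vectors, are not from the paper), not the authors' implementation.

```python
import numpy as np

def weighted_average(models, weights):
    """FedAvg-style aggregation: average model vectors weighted by sample counts."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def clustered_fl_round(global_model, clusters, local_update):
    """One round of clustered FL (illustrative only).

    clusters: list of clusters; each cluster is a list of (dataset, n_samples) pairs.
    local_update: function(model, dataset) -> locally trained model vector.
    """
    cluster_models, cluster_sizes = [], []
    for cluster in clusters:
        # Vehicles in a cluster train in parallel and exchange models over V2V links;
        # the cluster head aggregates them locally.
        members = [(local_update(global_model, data), n) for data, n in cluster]
        models, sizes = zip(*members)
        cluster_models.append(weighted_average(models, sizes))
        cluster_sizes.append(sum(sizes))
    # Only one aggregated model per cluster is uploaded to the MEC server,
    # which performs the final (global) aggregation.
    return weighted_average(cluster_models, cluster_sizes)
```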


Related articles

Stochastic, Distributed and Federated Optimization for Machine Learning

We study optimization algorithms for the finite-sum problems frequently arising in machine learning applications. First, we propose novel variants of stochastic gradient descent with a variance reduction property that enables linear convergence for strongly convex objectives. Second, we study the distributed setting, in which the data describing the optimization problem does not fit into a single c...

Full text
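For context on the abstract above, the display below gives the standard finite-sum objective and the classical variance-reduced (SVRG-style) step that yields linear convergence for strongly convex objectives; this is textbook notation and is not meant to reproduce the paper's specific variants.

```latex
% Finite-sum objective over n samples
\min_{w\in\mathbb{R}^d}\; F(w) \;=\; \frac{1}{n}\sum_{i=1}^{n} f_i(w)

% SVRG-style step: snapshot \tilde{w}, full gradient \nabla F(\tilde{w}), random index i_t
w_{t+1} \;=\; w_t \;-\; \eta\Bigl(\nabla f_{i_t}(w_t) \;-\; \nabla f_{i_t}(\tilde{w}) \;+\; \nabla F(\tilde{w})\Bigr),
\qquad i_t \sim \mathrm{Uniform}\{1,\dots,n\}
```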

Federated Optimization: Distributed Machine Learning for On-Device Intelligence

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimi...

Full text
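As a brief, hedged illustration of the federated optimization setting described above (standard notation, not quoted from the paper): with K nodes and node k holding n_k of the n samples, the global objective is a sample-weighted sum of local objectives, and communication efficiency comes from running many local steps on F_k between rare exchanges of w.

```latex
F(w) \;=\; \sum_{k=1}^{K}\frac{n_k}{n}\,F_k(w),
\qquad
F_k(w) \;=\; \frac{1}{n_k}\sum_{i\in\mathcal{P}_k} f_i(w)
```

where \(\mathcal{P}_k\) denotes the index set of the samples held by node k.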

Clustered Multi-Task Learning Via Alternating Structure Optimization

Multi-task learning (MTL) learns multiple related tasks simultaneously to improve generalization performance. Alternating structure optimization (ASO) is a popular MTL method that learns a shared low-dimensional predictive structure on hypothesis spaces from multiple related tasks. It has been applied successfully in many real world applications. As an alternative MTL approach, clustered multi-...

Full text
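As background for the ASO formulation mentioned above, a schematic form following Ando and Zhang's original ASO is shown below; the clustered multi-task variant in the cited paper modifies this objective, so treat the display only as orientation. Each task t combines a task-specific weight vector u_t with a projection onto a shared low-dimensional structure Θ.

```latex
\min_{\{u_t, v_t\},\,\Theta}\;
\sum_{t=1}^{T}\left(\frac{1}{n_t}\sum_{i=1}^{n_t}
L\bigl(y_{t,i},\,(u_t+\Theta^{\top}v_t)^{\top}x_{t,i}\bigr)
\;+\;\alpha\,\lVert u_t\rVert_2^2\right)
\quad\text{s.t.}\quad \Theta\,\Theta^{\top}=I_h
```

where \(\Theta\in\mathbb{R}^{h\times d}\) (with h much smaller than d) is the shared low-dimensional predictive structure learned jointly across the T tasks.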

Federated Optimization: Distributed Optimization Beyond the Datacenter

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes, but the goal remains to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of utmost importance. A ...

Full text

Entity Resolution and Federated Learning get a Federated Resolution

Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial upstream issue: entity resolution, i.e. finding the correspondence between the rows of the datasets. It is well known that entity resolution, ...

Full text
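A toy sketch of the vertically partitioned setup described above, under stated assumptions: the helper names are hypothetical, entity resolution is reduced to exact matching of a shared identifier, and neither the paper's privacy-preserving protocol nor its resolution algorithm is reproduced. Two providers hold different feature columns for the same entities; once rows are aligned, the linear model's score is the sum of the two partial scores.

```python
import numpy as np

def align_rows(ids_a, ids_b):
    """Toy entity resolution: match records by an exact shared identifier.

    Returns index arrays such that X_a[idx_a] and X_b[idx_b] describe the same
    entities, row for row. Real entity resolution must handle noisy identifiers.
    """
    pos_a = {e: i for i, e in enumerate(ids_a)}
    pos_b = {e: i for i, e in enumerate(ids_b)}
    common = sorted(set(pos_a) & set(pos_b))
    return (np.array([pos_a[e] for e in common]),
            np.array([pos_b[e] for e in common]))

def predict(X_a, X_b, w_a, w_b):
    """Vertically partitioned linear model: each provider contributes a partial score."""
    return X_a @ w_a + X_b @ w_b

def joint_gradient(X_a, X_b, y, w_a, w_b):
    """Least-squares gradient, split back into the two providers' parameter blocks."""
    residual = predict(X_a, X_b, w_a, w_b) - y
    return X_a.T @ residual / len(y), X_b.T @ residual / len(y)
```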


Journal

Journal title: IEEE Transactions on Intelligent Transportation Systems

Year: 2022

ISSN: 1558-0016, 1524-9050

DOI: https://doi.org/10.1109/tits.2022.3149860